78 research outputs found

    Dealing with natural language interfaces in a geolocation context

    In the geolocation field, where high-level programs and low-level devices coexist, it is often difficult to find a user-friendly interface for configuring all the parameters. The challenge addressed in this paper is to propose intuitive and simple, hence natural language, interfaces to interact with low-level devices. Such interfaces combine natural language processing with fuzzy representations of words, which facilitates the elicitation of business-level objectives in our context.
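
    As a rough illustration of the kind of fuzzy representation of words such interfaces rely on, here is a minimal Python sketch of two linguistic distance terms; the terms ("close", "far") and their breakpoints (50 m, 200 m) are illustrative assumptions, not values from the paper.

    ```python
    # Fuzzy representation of a hypothetical geolocation term "close":
    # fully 'close' up to 50 m, linearly less so until 200 m, 0 beyond.
    def mu_close(distance_m: float) -> float:
        """Degree in [0, 1] to which a distance matches the word 'close'."""
        if distance_m <= 50.0:
            return 1.0
        if distance_m >= 200.0:
            return 0.0
        return (200.0 - distance_m) / 150.0

    def mu_far(distance_m: float) -> float:
        """Complementary term: 'far' grows exactly where 'close' fades."""
        return 1.0 - mu_close(distance_m)

    for d in (30.0, 120.0, 500.0):
        print(f"{d} m -> close: {mu_close(d):.2f}, far: {mu_far(d):.2f}")
    ```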

    Non-functional Data Collection for Adaptive Business Processes and Decision Making

    Monitoring application services is increasingly a transverse key activity in SOA. Beyond traditional human system administration and load control, new activities such as autonomic management and SLA enforcement raise the stakes for monitoring requirements. In this paper, we address a new monitoring-based activity: selecting among competing service offers based on their currently measured QoS. Starting from this use case, the late binding of service calls in SOA given the current QoS of a set of candidate services, we first elicit the requirements and then describe M4ABP (Monitoring for Adaptive Business Process), a middleware component for monitoring services and delivering monitoring data to the business processes that wish to call them. M4ABP provides solutions for general requirements: flexibility and performance in data access for clients, coherency of data sets, and network usage optimization. Lessons learned from this first use case can be applied to similar monitoring scenarios, as well as to the larger field of context-aware computing.
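
    A minimal sketch of the use case the paper starts from (assumptions, not the M4ABP API): late binding of a service call, choosing among candidate services according to their currently measured QoS. The candidate URLs, metrics and scoring weights are illustrative.

    ```python
    from dataclasses import dataclass

    @dataclass
    class QoSSample:
        """One monitored candidate service and its current measurements."""
        service_url: str
        latency_ms: float    # currently measured mean response time
        availability: float  # measured availability, in [0, 1]

    def select_service(candidates: list[QoSSample]) -> QoSSample:
        """Late binding: rank candidates by a simple QoS score, pick the best.
        Higher availability and lower latency are better; weights are arbitrary."""
        def score(s: QoSSample) -> float:
            return s.availability - s.latency_ms / 1000.0
        return max(candidates, key=score)

    monitored = [
        QoSSample("http://a.example/svc", latency_ms=120.0, availability=0.99),
        QoSSample("http://b.example/svc", latency_ms=45.0, availability=0.95),
    ]
    print(select_service(monitored).service_url)  # http://b.example/svc
    ```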

    Requirements of the SALTY project

    This document is the first external deliverable of the SALTY project (Self-Adaptive very Large disTributed sYstems), funded by the ANR under contract ANR-09-SEGI-012. It is the result of task 1.1 of Work Package (WP) 1: Requirements and Architecture. Its objective is to identify and collect requirements from the use cases that are going to be developed in WP 4 (Use Cases and Validation). Based on the study and classification of the use cases, requirements for the envisaged framework are then determined and organized into features. These features will guide and control progress in all work packages of the project. As a start, features are classified and briefly described, and related scenarios in the defined use cases are pinpointed. In the following tasks and deliverables, these features will facilitate design, by assigning priorities to them and defining success criteria at a finer grain as the project progresses. This report, as the first external document, has no dependency on any other external document and serves as a reference for future external documents. As it has been built from the use case studies synthesized in two internal documents of the project, extracts from those two documents are made available as appendices (cf. appendices B and C).

    Data and responsibility from the computer scientist's point of view

    Logics and Law

    My meetings with Da

    Comparison and links between two 2-tuple linguistic models for decision making

    This paper deals with linguistic models that may prove useful for representing information during decision making. Data extraction is a complex problem, especially when dealing with information coming from human beings (linguistic assertions, preferences, feelings, etc.), and several models have emerged to overcome difficulties in expressing such data. Among those models, we are interested in two: the 2-tuple semantic model and the 2-tuple symbolic model. In this paper we focus on a comparison between the two models and prove that links can be made between them. An interesting result is obtained: the 2-tuple semantic model can generate a partitioning identical to the one that would be generated by the 2-tuple symbolic model. This makes it possible to compare the models, to mix them when one or the other is needed depending on the application, and then to reach a consensus. In closing, an example of use is given to demonstrate the value of the method.
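
    For reference, a minimal sketch of the Herrera & Martínez 2-tuple (semantic) representation the abstract builds on: a value beta in [0, g] over a term set S = {s_0, ..., s_g} becomes a pair (s_i, alpha), where s_i is the closest term and alpha = beta - i, in [-0.5, 0.5), is the symbolic translation. The five-term scale below is an illustrative assumption.

    ```python
    S = ["very low", "low", "medium", "high", "very high"]  # g = 4

    def delta(beta: float) -> tuple[str, float]:
        """Map beta in [0, g] to a 2-tuple (term, symbolic translation)."""
        i = int(beta + 0.5)  # round half up, so alpha stays in [-0.5, 0.5)
        return S[i], beta - i

    def delta_inv(term: str, alpha: float) -> float:
        """Inverse mapping: recover the numeric value beta from a 2-tuple."""
        return S.index(term) + alpha

    print(delta(2.75))               # ('high', -0.25)
    print(delta_inv("high", -0.25))  # 2.75
    ```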

    Computing with words: towards an end-to-end use of linguistic terms in the reasoning chain

    Computing with Words (CW), although a direct offshoot of fuzzy set theory, which is now more than 40 years old, is still a widely open research domain. Indeed, humans have long dreamed of creating truly smart machines. The great passion for this research axis faded after the failures of Artificial Intelligence (AI) in the seventies and eighties, notably exposed in Hubert Dreyfus' 1972 book What Computers Can't Do, and we now know that the robot as an "exact copy of the human being" will not be arriving any time soon. But helping humans in their decision making, and simulating their reasoning selectively in specific and well-defined contexts, is a field that has proven its feasibility and efficiency, and has thus aroused interest again. Fuzzy set theory, and then CW with its various representations of linguistic data, has contributed widely to relaunching this part of AI.
    In this report, we show how CW can get involved in this field at different levels. From a practical point of view, many applications have benefited, and can still benefit, from linguistic approaches. Let us cite for the record, limiting ourselves to what we personally tackled, the fields of colorimetrics, performing arts and autonomic computing, in which we have experimented and already obtained satisfactory results.
    From a theoretical point of view, we consider two axes:
    – the axis that feeds on applications: formalizing new concepts (so as to generalize and thus reuse them) arising from the introduction of linguistic approaches in applications, for instance the formalization of LCP-nets;
    – the axis that feeds applications: better grounding existing theoretical concepts (for instance, a generalized modus ponens, implications, t-norms, t-conorms, etc. dedicated to linguistic 2-tuples) or imagining new ones (for instance, a unification of several linguistic models via a vectorial representation).
    We thus show the articulation between theory and practice around linguistic approaches for modeling and reasoning under uncertainty, each feeding the other.
    From the perceptual computing point of view, we have shown that it can benefit from the four following axes, which remain at least partially open objectives:
    – the setting up of end-to-end linguistic processes;
    – the use of various representation models, such as fuzzy subsets, the linguistic 2-tuples of Herrera & Martínez, the proportional 2-tuples of Wang & Hao, or the linguistic degrees of Truck & Akdag, which allow us to capture nuances and express computation results over linguistic definition domains whose cardinality is bounded and close to the user's perception;
    – the setting up of linguistic tools that go beyond the well-known fuzzy inference, see for example LCP-nets and their way of handling conditional preferences; and
    – the integration of these tools into knowledge elicitation processes, and the composition of these processes to permit more complex reasoning, such as joint inference in the case of collaborative decision-making.
    The general context of my research work lies in the representation, modeling and combination of knowledge in imprecise and vague settings, in particular within the theory of fuzzy subsets on the one hand and the theory of multisets on the other. In this context, we strive to design linguistic models and tools to overcome problems of imprecision. Our work was first founded on modulation, which preserves the simplicity of a space of linguistic variables of rather low cardinality while allowing a wide range of nuances to be applied to it. We then used these models and tools in contexts where the expression of nuances is very important, such as *perceptual computing* in the field of color classification or in theatrical play, and, more recently, to capture programmers' intentions in order to establish policies for the control and dynamic adaptation of software architectures, following the *autonomic computing* approach.
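
    As one concrete example of an end-to-end linguistic process of the kind described above, here is a minimal sketch of the classical 2-tuple arithmetic mean used in collaborative decision-making: opinions enter and leave the computation as linguistic 2-tuples. The term set and expert inputs are illustrative assumptions, not taken from the report.

    ```python
    S = ["none", "low", "medium", "high", "total"]  # g = 4

    def delta(beta: float) -> tuple[str, float]:
        i = int(beta + 0.5)  # closest term; alpha stays in [-0.5, 0.5)
        return S[i], beta - i

    def delta_inv(term: str, alpha: float) -> float:
        return S.index(term) + alpha

    def mean_2tuples(opinions: list[tuple[str, float]]) -> tuple[str, float]:
        """Average several linguistic opinions; the result stays linguistic."""
        betas = [delta_inv(t, a) for t, a in opinions]
        return delta(sum(betas) / len(betas))

    # Three experts rate the same criterion; the consensus stays linguistic:
    # 'high', shaded noticeably toward 'medium'.
    print(mean_2tuples([("high", 0.0), ("medium", 0.25), ("high", -0.5)]))
    # -> ('high', -0.416...)
    ```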
    • 

    corecore